-bash/zsh: mdadm: command not found

# Windows (WSL2)
sudo apt-get update
sudo apt-get install mdadm

# Debian
apt-get install mdadm

# Ubuntu
apt-get install mdadm

# Alpine
apk add mdadm

# Arch Linux
pacman -S mdadm

# Kali Linux
apt-get install mdadm

# CentOS
yum install mdadm

# Fedora
dnf install mdadm

# Raspbian
apt-get install mdadm

# Dockerfile
dockerfile.run/mdadm

# Docker
docker run cmd.cat/mdadm mdadm
mdadm replaces all of the earlier tools for managing RAID arrays and handles nearly all of the user-space side of RAID. A few operations still have to be performed by writing to the /proc filesystem, but not many. mdadm has several modes of operation, described below.
mdadm [mode] <raiddevice> [options] <component-devices>
mdadm has several major modes of operation:

Assemble
Assemble the components of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array.

Build
Build an array that doesn't have per-device metadata (superblocks). For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate components have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.

Create
Create a new array with per-device metadata (superblocks). Appropriate metadata is written to each device, and then the array comprising those devices is activated. A 'resync' process is started to make sure that the array is consistent (e.g. both sides of a mirror contain the same data), but the content of the device is left otherwise untouched. The array can be used as soon as it has been created; there is no need to wait for the initial resync to finish.

Follow or Monitor
Monitor one or more md devices and act on any state changes. This is only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as only these have interesting state. RAID0 or Linear never have missing, spare, or failed drives, so there is nothing to monitor.

Grow
Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options include changing the active size of component devices and changing the number of active devices in Linear and RAID levels 0/1/4/5/6, changing the RAID level between 0, 1, 5, and 6, and between 0 and 10, changing the chunk size and layout for RAID 0/4/5/6/10, as well as adding or removing a write-intent bitmap and changing the array's consistency policy.

Incremental Assembly
Add a single device to an appropriate array. If the addition of the device makes the array runnable, the array will be started. This provides a convenient interface to a hot-plug system. As each device is detected, mdadm has a chance to include it in some array as appropriate. Optionally, when the --fail flag is passed in, the device is removed from any active array instead of being added. If a CONTAINER is passed to mdadm in this mode, then any arrays within that container will be assembled and started.

Manage
This is for doing things to specific components of an array such as adding new spares and removing faulty devices.

Misc
This is an 'everything else' mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information-gathering operations.

Auto-detect
This mode does not act on a specific device or array, but rather it requests the Linux Kernel to activate any auto-detected arrays.
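For instance, a hedged sketch of two of these modes in use (the device name below is an assumption, not taken from this page):

# Assemble every array listed in the config file or found by scanning
sudo mdadm --assemble --scan
# Examine the RAID metadata on a single component device (Misc mode)
sudo mdadm --examine /dev/sda1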
Options for selecting a mode are:

-A, --assemble
Assemble a pre-existing array.

-B, --build
Build a legacy array without superblocks.

-C, --create
Create a new array.

-F, --follow, --monitor
Select Monitor mode.

-G, --grow
Change the size or shape of an active array.

-I, --incremental
Add/remove a single device to/from an appropriate array, and possibly start the array.

--auto-detect
Request that the kernel starts any auto-detected arrays. This can only work if md is compiled into the kernel, not if it is a module. Arrays can be auto-detected by the kernel if all the components are in primary MS-DOS partitions with partition type FD, and all use v0.90 metadata. In-kernel autodetect is not recommended for new installations. Using mdadm to detect and assemble arrays, possibly in an initrd, is substantially more flexible and should be preferred.

If a device is given before any options, or if the first option is one of --add, --re-add, --add-spare, --fail, --remove, or --replace, then the MANAGE mode is assumed. Anything other than these will cause the Misc mode to be assumed.

Options that are not mode-specific are:

-h, --help
Display a general help message or, after one of the above options, a mode-specific help message.

--help-options
Display more detailed help about command-line parsing and some commonly used options.

-V, --version
Print version information for mdadm.

-v, --verbose
Be more verbose about what is happening. This can be used twice to be extra-verbose. The extra verbosity currently only affects --detail --scan and --examine --scan.

-q, --quiet
Avoid printing purely informative messages. With this, mdadm will be silent unless there is something really important to report.

-f, --force
Be more forceful about certain operations. See the various modes for the exact meaning of this option in different contexts.

-c, --config=
Specify the config file or directory. If not specified, the default config file and default conf.d directory will be used. See mdadm.conf(5) for more details. If the config file given is partitions then nothing will be read, but mdadm will act as though the config file contained exactly DEVICE partitions containers and will read /proc/partitions to find a list of devices to scan, and /proc/mdstat to find a list of containers to examine. If the word none is given for the config file, then mdadm will act as though the config file were empty. If the name given is of a directory, then mdadm will collect all the files contained in the directory with a name ending in .conf, sort them lexically, and process all of those files as config files.

-s, --scan
Scan config file or /proc/mdstat for missing information. In general, this option gives mdadm permission to get any missing information (like component devices, array devices, array identities, and alert destination) from the configuration file (see previous option); one exception is MISC mode when using --detail or --stop, in which case --scan says to get a list of array devices from /proc/mdstat.

-e, --metadata=
Declare the style of RAID metadata (superblock) to be used. The default is 1.2 for --create, and to guess for other operations. The default can be overridden by setting the metadata value for the CREATE keyword in mdadm.conf. Options are:

0, 0.90
Use the original 0.90 format superblock. This format limits arrays to 28 component devices and limits component devices of levels 1 and greater to 2 terabytes.
It is also possible for there to be confusion about whether the superblock applies to a whole device or just the last partition, if that partition starts on a 64K boundary.

1, 1.0, 1.1, 1.2, default
Use the new version-1 format superblock. This has fewer restrictions. It can easily be moved between hosts with different endianness, and a recovery operation can be checkpointed and restarted. The different sub-versions store the superblock at different locations on the device: at the end (for 1.0), at the start (for 1.1), or 4K from the start (for 1.2). "1" is equivalent to "1.2" (the commonly preferred 1.x format). "default" is equivalent to "1.2".

ddf
Use the "Industry Standard" DDF (Disk Data Format) format defined by SNIA. DDF is deprecated and there is no active development around it. When creating a DDF array a CONTAINER will be created, and normal arrays can be created in that container.

imsm
Use the Intel(R) Matrix Storage Manager metadata format. This creates a CONTAINER which is managed in a similar manner to DDF, and is supported by an option-rom on some platforms: https://www.intel.com/content/www/us/en/support/products/122484

--homehost=
This will override any HOMEHOST setting in the config file and provides the identity of the host which should be considered the home for any arrays. When creating an array, the homehost will be recorded in the metadata. For version-1 superblocks, it will be prefixed to the array name. For version-0.90 superblocks, part of the SHA1 hash of the hostname will be stored in the latter half of the UUID. When reporting information about an array, any array which is tagged for the given homehost will be reported as such. When using Auto-Assemble, only arrays tagged for the given homehost will be allowed to use 'local' names (i.e. not ending in '_' followed by a digit string). See below under Auto-Assembly. The special name "any" can be used as a wild card. If an array is created with --homehost=any then the name "any" will be stored in the array and it can be assembled in the same way on any host. If an array is assembled with this option, then the homehost recorded on the array will be ignored.

--prefer=
When mdadm needs to print the name for a device it normally finds the name in /dev which refers to the device and is the shortest. When a path component is given with --prefer, mdadm will prefer a longer name if it contains that component. For example --prefer=by-uuid will prefer a name in a subdirectory of /dev called by-uuid. This functionality is currently only provided by --detail and --monitor.

--home-cluster=
Specifies the cluster name for the md device. The md device can be assembled only on the cluster which matches the name specified. If this option is not provided, mdadm tries to detect the cluster name automatically.

For create, build, or grow:

-n, --raid-devices=
Specify the number of active devices in the array. This, plus the number of spare devices (see below), must equal the number of component devices (including "missing" devices) that are listed on the command line for --create. Setting a value of 1 is probably a mistake and so requires that --force be specified first. A value of 1 will then be allowed for linear, multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6. This number can only be changed using --grow for RAID1, RAID4, RAID5 and RAID6 arrays.

-x, --spare-devices=
Specify the number of spare (eXtra) devices in the initial array. Spares can also be added and removed later.
The number of component devices listed on the command line must equal the number of RAID devices plus the number of spare devices.

-z, --size=
Amount (in Kilobytes) of space to use from each drive in RAID levels 1/4/5/6/10 and for RAID0 on external metadata. This must be a multiple of the chunk size, and must leave about 128Kb of space at the end of the drive for the RAID superblock. If this is not specified (as it normally is not) the smallest drive (or partition) sets the size, though if there is a variance among the drives of greater than 1%, a warning is issued. A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively. Sometimes a replacement drive can be a little smaller than the original drives, though this should be minimised by IDEMA standards. Such a replacement drive will be rejected by md. To guard against this it can be useful to set the initial size slightly smaller than the smaller device with the aim that it will still be larger than any replacement. This option can be used with --create for determining the initial size of an array. For external metadata, it can be used on a volume, but not on a container itself. Setting the initial size of a RAID0 array is only valid for external metadata. This value can be set with --grow for RAID levels 1/4/5/6/10, though DDF arrays may not be able to support this. RAID0 array size cannot be changed. If the array was created with a size smaller than the currently active drives, the extra space can be accessed using --grow. The size can be given as max, which means to choose the largest size that fits on all current drives. Before reducing the size of the array (with --grow --size=) you should make sure that space isn't needed. If the device holds a filesystem, you would need to resize the filesystem to use less space. After reducing the array size you should check that the data stored in the device is still available. If the device holds a filesystem, then an 'fsck' of the filesystem is a minimum requirement. If there are problems the array can be made bigger again with no loss with another --grow --size= command.

-Z, --array-size=
This is only meaningful with --grow and its effect is not persistent: when the array is stopped and restarted the default array size will be restored. Setting the array-size causes the array to appear smaller to programs that access the data. This is particularly needed before reshaping an array so that it will be smaller. As the reshape is not reversible, but setting the size with --array-size is, it is required that the array size is reduced as appropriate before the number of devices in the array is reduced. Before reducing the size of the array you should make sure that space isn't needed. If the device holds a filesystem, you would need to resize the filesystem to use less space. After reducing the array size you should check that the data stored in the device is still available. If the device holds a filesystem, then an 'fsck' of the filesystem is a minimum requirement. If there are problems the array can be made bigger again with no loss with another --grow --array-size= command. A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively. A value of max restores the apparent size of the array to be whatever the real amount of available space is. Clustered arrays do not support this parameter yet.

-c, --chunk=
Specify chunk size in kilobytes. The default when creating an array is 512KB.
To ensure compatibility with earlier versions, the default when building an array with no persistent metadata is 64KB. This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10. RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power of 2, with the minimal chunk size being 4KB. A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

--rounding=
Specify the rounding factor for a Linear array. The size of each component will be rounded down to a multiple of this size. This is a synonym for --chunk but highlights the different meaning for Linear as compared to other RAID levels. The default is 0K (i.e. no rounding).

-l, --level=
Set RAID level. When used with --create, options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, container. Obviously some of these are synonymous. When a CONTAINER metadata type is requested, only the container level is permitted, and it does not need to be explicitly given. When used with --build, only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid. Can be used with --grow to change the RAID level in some cases. See LEVEL CHANGES below.

-p, --layout=
This option configures the fine details of data layout for RAID5, RAID6, and RAID10 arrays, and controls the failure modes for faulty. It can also be used for working around a kernel bug with RAID0, but generally doesn't need to be used explicitly. The layout of the RAID5 parity block can be one of left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs. The default is left-symmetric. It is also possible to cause RAID5 to use a RAID4-like layout by choosing parity-first or parity-last. Finally for RAID5 there are DDF-compatible layouts, ddf-zero-restart, ddf-N-restart, and ddf-N-continue. These same layouts are available for RAID6. There are also 4 layouts that will provide an intermediate stage for converting between RAID5 and RAID6. These provide a layout which is identical to the corresponding RAID5 layout on the first N-1 devices, and has the 'Q' syndrome (the second 'parity' block used by RAID6) on the last device. These layouts are: left-symmetric-6, right-symmetric-6, left-asymmetric-6, right-asymmetric-6, and parity-first-6. When setting the failure mode for level faulty, the options are: write-transient, wt, read-transient, rt, write-persistent, wp, read-persistent, rp, write-all, read-fixable, rf, clear, flush, none. Each failure mode can be followed by a number, which is used as a period between fault generation. Without a number, the fault is generated once on the first relevant request. With a number, the fault will be generated after that many requests, and will continue to be generated every time the period elapses. Multiple failure modes can be current simultaneously by using the --grow option to set subsequent failure modes. "clear" or "none" will remove any pending or periodic failure modes, and "flush" will clear any persistent faults. The layout options for RAID10 are one of 'n', 'o' or 'f' followed by a small number signifying the number of copies of each data block. The default is 'n2'. The supported options are: 'n' signals 'near' copies. Multiple copies of one data block are at similar offsets in different devices. 'o' signals 'offset' copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated but are rotated by one device so duplicate blocks are on different devices.
Thus subsequent copies of a block are in the next drive, and are one chunk further down. 'f' signals 'far' copies (multiple copies have very different offsets). See md(4) for more detail about 'near', 'offset', and 'far'. As for the number of copies of each data block, 2 is normal and 3 can be useful. This number can be at most equal to the number of devices in the array. It does not need to divide evenly into that number (e.g. it is perfectly legal to have an 'n2' layout for an array with an odd number of devices). A bug introduced in Linux 3.14 means that RAID0 arrays with devices of differing sizes started using a different layout. This could lead to data corruption. Since Linux 5.4 (and various stable releases that received backports), the kernel will not accept such an array unless a layout is explicitly set. It can be set to 'original' or 'alternate'. When creating a new array, mdadm will select 'original' by default, so the layout does not normally need to be set. An array created for either 'original' or 'alternate' will not be recognized by an (unpatched) kernel prior to 5.4. To create a RAID0 array with devices of differing sizes that can be used on an older kernel, you can set the layout to 'dangerous'. This will use whichever layout the running kernel supports, so the data on the array may become corrupt when changing kernel from pre-3.14 to a later kernel. When an array is converted between RAID5 and RAID6 an intermediate RAID6 layout is used in which the second parity block (Q) is always on the last device. To convert a RAID5 to RAID6 and leave it in this new layout (which does not require re-striping) use --layout=preserve. This will try to avoid any restriping. The converse of this is --layout=normalise, which will change a non-standard RAID6 layout into a more standard arrangement.

--parity=
Same as --layout (thus explaining the p of -p).

-b, --bitmap=
Specify how to store a write-intent bitmap. The following values are supported:
internal - the bitmap is stored with the metadata on the array and so is replicated on all devices.
clustered - the array is created for a clustered environment. One bitmap is created for each node as defined by the --nodes parameter and they are stored internally.
none - create the array with no bitmap, or remove any present bitmap (grow mode).
Setting the bitmap to a file is deprecated and should not be used. The file should not exist unless --force is also given. The same file should be provided when assembling the array. The file name must contain at least one slash ('/'). Bitmap files are only known to work on ext2 and ext3. Storing bitmap files on other filesystems may result in serious problems. When creating an array on devices which are 100G or larger, mdadm automatically adds an internal bitmap as it will usually be beneficial. This can be suppressed with --bitmap=none or by selecting a different consistency policy with --consistency-policy.

--bitmap-chunk=
Set the chunk size of the bitmap. Each bit corresponds to that many Kilobytes of storage. When using a file-based bitmap, the default is to use the smallest size that is at least 4 and requires no more than 2^21 chunks. When using an internal bitmap, the chunk size defaults to 64Meg, or larger if necessary to fit the bitmap into the available space. A suffix of 'K', 'M', 'G' or 'T' can be given to indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively.

-W, --write-mostly
Subsequent devices listed in a --build, --create, or --add command will be flagged as 'write-mostly'.
This is valid for RAID1 only and means that the 'md' driver will avoid reading from these devices if at all possible. This can be useful if mirroring over a slow link.

--write-behind=
Specify that write-behind mode should be enabled (valid for RAID1 only). If an argument is specified, it will set the maximum number of outstanding writes allowed. The default value is 256. A write-intent bitmap is required in order to use write-behind mode, and write-behind is only attempted on drives marked as write-mostly.

--failfast
Subsequent devices listed in a --create or --add command will be flagged as 'failfast'. This is valid for RAID1 and RAID10 only. IO requests to these devices will be encouraged to fail quickly rather than cause long delays due to error handling. Also no attempt is made to repair a read error on these devices. If an array becomes degraded so that the 'failfast' device is the only usable device, the 'failfast' flag will then be ignored and extended delays will be preferred to complete failure. The 'failfast' flag is appropriate for storage arrays which have a low probability of true failure, but which may sometimes cause unacceptable delays due to internal maintenance functions.

--assume-clean
Tell mdadm that the array pre-existed and is known to be clean. It can be useful when trying to recover from a major failure as you can be sure that no data will be affected unless you actually write to the array. It can also be used when creating a RAID1 or RAID10 if you want to avoid the initial resync; however, this practice, while normally safe, is not recommended. Use this only if you really know what you are doing. When the devices that will be part of a new array were filled with zeros before creation, the operator knows the array is actually clean. If that is the case, such as after running badblocks, this argument can be used to tell mdadm the facts the operator knows. When an array is resized to a larger size with --grow --size=, the new space is normally resynced in the same way that the whole array is resynced at creation. --assume-clean can be used with that command to avoid the automatic resync.

--write-zeroes
When creating an array, send write zeroes requests to all the block devices. This should zero the data area on all disks such that the initial sync is not necessary and, if successful, will behave as if --assume-clean was specified. This is intended for use with devices that have hardware offload for zeroing, but despite this zeroing can still take several minutes for large disks. Thus a message is printed before and after zeroing and each disk is zeroed in parallel with the others. This is only meaningful with --create.

--backup-file=
This is needed when --grow is used to increase the number of raid devices in a RAID5 or RAID6 if there are no spare devices available, or to shrink, change RAID level or layout. See the GROW MODE section below on RAID-DEVICES CHANGES. The file must be stored on a separate device, not on the RAID array being reshaped.

--data-offset=
Arrays with 1.x metadata can leave a gap between the start of the device and the start of array data. This gap can be used for various metadata. The start of data is known as the data-offset. Normally an appropriate data offset is computed automatically. However it can be useful to set it explicitly, such as when re-creating an array which was originally created using a different version of mdadm which computed a different offset. Setting the offset explicitly overrides the default.
The value given is in Kilobytes unless a suffix of 'K', 'M', 'G' or 'T' is used to explicitly indicate Kilobytes, Megabytes, Gigabytes or Terabytes respectively. --data-offset can also be used with --grow for some RAID levels (initially on RAID10). This allows the data-offset to be changed as part of the reshape process. When the data offset is changed, no backup file is required as the difference in offsets is used to provide the same functionality. When the new offset is earlier than the old offset, the number of devices in the array cannot shrink. When it is after the old offset, the number of devices in the array cannot increase. When creating an array, --data-offset can be specified as variable. In this case each member device is expected to have an offset appended to the name, separated by a colon. This makes it possible to recreate exactly an array which has varying data offsets (as can happen when different versions of mdadm are used to add different devices).

--continue
This option is complementary to the --freeze-reshape option for assembly. It is needed when a --grow operation is interrupted and is not restarted automatically due to --freeze-reshape usage during array assembly. This option is used together with the -G (--grow) command and the device for a pending reshape to be continued. All parameters required for reshape continuation will be read from the array metadata. If the initial --grow command required the --backup-file= option, the continuation will require exactly the same backup file to be given as well. Any other parameter passed together with the --continue option will be ignored.

-N, --name=
Set a name for the array. It must be POSIX PORTABLE NAME compatible and cannot be longer than 32 chars. This is effective when creating an array with v1 metadata, or an external array. If a name is needed but not specified, it is taken from the basename of the device that is being created. See DEVICE NAMES.

-R, --run
Insist that mdadm run the array, even if some of the components appear to be active in another array or filesystem. Normally mdadm will ask for confirmation before including such components in an array. This option causes that question to be suppressed.

-f, --force
Insist that mdadm accept the geometry and layout specified without question. Normally mdadm will not allow the creation of an array with only one device, and will try to create a RAID5 array with one missing drive (as this makes the initial resync work faster). With --force, mdadm will not try to be so clever.

-o, --readonly
Start the array read-only rather than read-write as normal. No writes will be allowed to the array, and no resync, recovery, or reshape will be started. It works with Create, Assemble, Manage and Misc modes.

-a, --add
This option can be used in Grow mode in two cases. If the target array is a Linear array, then --add can be used to add one or more devices to the array. They are simply catenated on to the end of the array. Once added, the devices cannot be removed. If the --raid-disks option is being used to increase the number of devices in an array, then --add can be used to add some extra devices to be included in the array. In most cases this is not needed as the extra devices can be added as spares first, and then the number of raid disks can be changed. However, for RAID0 it is not possible to add spares. So to increase the number of devices in a RAID0, it is necessary to set the new number of devices, and to add the new devices, in the same command, as shown in the sketch below.
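A minimal sketch of that RAID0 case (the array name, device name and device count are assumptions): grow a two-disk RAID0 to three disks by adding the device and setting the new count in one command:

sudo mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdd1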
--nodes
Only works when the array is created for a clustered environment. It specifies the maximum number of nodes in the cluster that will use this device simultaneously. If not specified, this defaults to 4.

--write-journal
Specify the journal device for a RAID-4/5/6 array. The journal device should be an SSD with a reasonable lifetime.

-k, --consistency-policy=
Specify how the array maintains consistency in the case of an unexpected shutdown. Only relevant for RAID levels with redundancy. Currently supported options are:
resync - Full resync is performed and all redundancy is regenerated when the array is started after an unclean shutdown.
bitmap - Resync assisted by a write-intent bitmap. Implicitly selected when using --bitmap.
journal - For RAID levels 4/5/6, the journal device is used to log transactions and replay after an unclean shutdown. Implicitly selected when using --write-journal.
ppl - For RAID5 only, Partial Parity Log is used to close the write hole and eliminate resync. PPL is stored in the metadata region of the RAID member drives; no additional journal drive is needed.
Can be used with --grow to change the consistency policy of an active array in some cases. See CONSISTENCY POLICY CHANGES below.

For assemble:

-u, --uuid=
UUID of the array to assemble. Devices which don't have this UUID are excluded.

-m, --super-minor=
Minor number of the device that the array was created for. Devices which don't have this minor number are excluded. If you create an array as /dev/md1, then all superblocks will contain the minor number 1, even if the array is later assembled as /dev/md2. Giving the literal word "dev" for --super-minor will cause mdadm to use the minor number of the md device that is being assembled. e.g. when assembling /dev/md0, --super-minor=dev will look for superblocks with a minor number of 0. --super-minor is only relevant for v0.90 metadata, and should not normally be used. Using --uuid is much safer.

-N, --name=
Specify the name of the array to assemble. It must be POSIX PORTABLE NAME compatible and cannot be longer than 32 chars. This must be the name that was specified when creating the array. It must either match the name stored in the superblock exactly, or it must match with the current homehost prefixed to the start of the given name.

-f, --force
Assemble the array even if the metadata on some devices appears to be out-of-date. If mdadm cannot find enough working devices to start the array, but can find some devices that are recorded as having failed, then it will mark those devices as working so that the array can be started. This works only for native metadata. For external metadata it allows starting a dirty degraded RAID 4, 5, or 6. An array which requires --force to be started may contain data corruption. Use it carefully.

-R, --run
Attempt to start the array even if fewer drives were given than were present last time the array was active. Normally if not all the expected drives are found and --scan is not used, then the array will be assembled but not started. With --run an attempt will be made to start it anyway.

--no-degraded
This is the reverse of --run in that it inhibits the startup of the array unless all expected drives are present. This is only needed with --scan, and can be used if the physical connections to devices are not as reliable as you would like.

-b, --bitmap=
Specify the bitmap file that was given when the array was created. If an array has an internal bitmap, there is no need to specify this when assembling the array.
--backup-file=
If --backup-file was used while reshaping an array (e.g. changing the number of devices or the chunk size) and the system crashed during the critical section, then the same --backup-file must be presented to --assemble to allow possibly corrupted data to be restored, and the reshape to be completed.

--invalid-backup
If the file needed for the above option is not available for any reason, an empty file can be given together with this option to indicate that the backup file is invalid. In this case the data that was being rearranged at the time of the crash could be irrecoverably lost, but the rest of the array may still be recoverable. This option should only be used as a last resort if there is no way to recover the backup file.

-U, --update=
Update the superblock on each device while assembling the array. The argument given to this flag can be one of summaries, uuid, name, nodes, homehost, home-cluster, resync, byteorder, devicesize, no-bitmap, bbl, no-bbl, ppl, no-ppl, layout-original, layout-alternate, layout-unspecified, metadata, or super-minor.

The super-minor option will update the preferred minor field on each superblock to match the minor number of the array being assembled. This can be useful if --examine reports a different "Preferred Minor" to --detail. In some cases this update will be performed automatically by the kernel driver. In particular, the update happens automatically at the first write to an array with redundancy (RAID level 1 or greater).

The uuid option will change the uuid of the array. If a UUID is given with the --uuid option, that UUID will be used as the new UUID and will NOT be used to help identify the devices in the array. If no --uuid is given, a random UUID is chosen.

The name option will change the name of the array as stored in the superblock. This is only supported for version-1 superblocks.

The nodes option will change the nodes of the array as stored in the bitmap superblock. This option only works for a clustered environment.

The homehost option will change the homehost as recorded in the superblock. For version-0 superblocks, this is the same as updating the UUID. For version-1 superblocks, this involves updating the name.

The home-cluster option will change the cluster name as recorded in the superblock and bitmap. This option only works for a clustered environment.

The resync option will cause the array to be marked dirty, meaning that any redundancy in the array (e.g. parity for RAID5, copies for RAID1) may be incorrect. This will cause the RAID system to perform a "resync" pass to make sure that all redundant information is correct.

The byteorder option allows arrays to be moved between machines with different byte-order, such as from a big-endian machine like a Sparc or some MIPS machines, to a little-endian x86_64 machine. When assembling such an array for the first time after a move, giving --update=byteorder will cause mdadm to expect superblocks to have their byteorder reversed, and will correct that order before assembling the array. This is only valid with original (Version 0.90) superblocks.

The summaries option will correct the summaries in the superblock, that is, the counts of total, working, active, failed, and spare devices.

The devicesize option will rarely be of use. It applies to version 1.1 and 1.2 metadata only (where the metadata is at the start of the device) and is only useful when the component device has changed size (typically become larger).
The version 1 metadata records the amount of the device that can be used to store data, so if a device in a version 1.1 or 1.2 array becomes larger, the metadata will still be visible, but the extra space will not. In this case it might be useful to assemble the array with --update=devicesize. This will cause mdadm to determine the maximum usable amount of space on each device and update the relevant field in the metadata.

The metadata option only works on v0.90 metadata arrays and will convert them to v1.0 metadata. The array must not be dirty (i.e. it must not need a sync) and it must not have a write-intent bitmap. The old metadata will remain on the devices, but will appear older than the new metadata and so will usually be ignored. The old metadata (or indeed the new metadata) can be removed by giving the appropriate --metadata= option to --zero-superblock.

The no-bitmap option can be used when an array has an internal bitmap which is corrupt in some way so that assembling the array normally fails. It will cause any internal bitmap to be ignored.

The bbl option will reserve space in each device for a bad block list. This will be 4K in size and positioned near the end of any free space between the superblock and the data.

The no-bbl option will cause any reservation of space for a bad block list to be removed. If the bad block list contains entries, this will fail, as removing the list could cause data corruption.

The ppl option will enable PPL for a RAID5 array and reserve space for PPL on each device. There must be enough free space between the data and superblock, and a write-intent bitmap or journal must not be used.

The no-ppl option will disable PPL in the superblock.

The layout-original and layout-alternate options are for RAID0 arrays with non-uniform device sizes that were in use before Linux 5.4. If the array was being used with Linux 3.13 or earlier, then to assemble the array on a new kernel, --update=layout-original must be given. If the array was created and used with a kernel from Linux 3.14 to Linux 5.3, then --update=layout-alternate must be given. This only needs to be given once. Subsequent assembly of the array will happen normally. For more information, see md(4).

The layout-unspecified option reverts the effect of layout-original or layout-alternate and allows the array to be again used on a kernel prior to Linux 5.3. This option should be used with great caution.

--freeze-reshape
This option is intended to be used in start-up scripts during the initrd boot phase. When the array under reshape is assembled during the initrd phase, this option stops the reshape after the reshape-critical section has been restored. This happens before the file system pivot operation and avoids loss of filesystem context. Losing file system context would cause reshape to be broken. Reshape can be continued later using the --continue option for the grow command.

For Manage mode:

-t, --test
Unless a more serious error occurred, mdadm will exit with a status of 2 if no changes were made to the array and 0 if at least one change was made. This can be useful when an indirect specifier such as missing, detached or faulty is used in requesting an operation on the array. --test will report failure if these specifiers didn't find any match.

-a, --add
Hot-add listed devices. If a device appears to have recently been part of the array (possibly it failed or was removed) the device is re-added as described in the next point.
If that fails or the device was never part of the array, the device is added as a hot-spare. If the array is degraded, it will immediately start to rebuild data onto that spare. Note that this and the following options are only meaningful on arrays with redundancy. They don't apply to RAID0 or Linear.

--re-add
Re-add a device that was previously removed from an array. If the metadata on the device reports that it is a member of the array, and the slot that it used is still vacant, then the device will be added back to the array in the same position. This will normally cause the data for that device to be recovered. However, based on the event count on the device, the recovery may only require sections that are flagged by a write-intent bitmap to be recovered, or may not require any recovery at all. When used on an array that has no metadata (i.e. it was built with --build), it will be assumed that bitmap-based recovery is enough to make the device fully consistent with the array. --re-add can also be accompanied by --update=devicesize, --update=bbl, or --update=no-bbl. See the descriptions of these options when used in Assemble mode for an explanation of their use. If the device name given is missing, then mdadm will try to find any device that looks like it should be part of the array but isn't, and will try to re-add all such devices. If the device name given is faulty, then mdadm will find all devices in the array that are marked faulty, remove them and attempt to immediately re-add them. This can be useful if you are certain that the reason for failure has been resolved.

--add-spare
Add a device as a spare. This is similar to --add except that it does not attempt --re-add first. The device will be added as a spare even if it looks like it could be a recent member of the array.

-r, --remove
Remove listed devices. They must not be active, i.e. they should be failed or spare devices. As well as the name of a device file (e.g. /dev/sda1), the words failed and detached and names like set-A can be given to --remove. The first causes all failed devices to be removed. The second causes any device which is no longer connected to the system (i.e. an 'open' returns ENXIO) to be removed. The third will remove a set as described below under --fail.

-f, --fail
Mark listed devices as faulty. As well as the name of a device file, the word detached or a set name like set-A can be given. The former will cause any device that has been detached from the system to be marked as failed. It can then be removed. For RAID10 arrays where the number of copies evenly divides the number of devices, the devices can be conceptually divided into sets where each set contains a single complete copy of the data on the array. Sometimes a RAID10 array will be configured so that these sets are on separate controllers. In this case, all the devices in one set can be failed by giving a name like set-A or set-B to --fail. The appropriate set names are reported by --detail.

--set-faulty
Same as --fail.

--replace
Mark listed devices as requiring replacement. As soon as a spare is available, it will be rebuilt and will replace the marked device. This is similar to marking a device as faulty, but the device remains in service during the recovery process to increase resilience against multiple failures. When the replacement process finishes, the replaced device will be marked as faulty.

--with
This can follow a list of --replace devices. The devices listed after --with will preferentially be used to replace the devices listed after --replace.
These devices must already be spare devices in the array (see the sketch after this options list).

--write-mostly
Subsequent devices that are added or re-added will have the 'write-mostly' flag set. This is only valid for RAID1 and means that the 'md' driver will avoid reading from these devices if possible.

--readwrite
Subsequent devices that are added or re-added will have the 'write-mostly' flag cleared.

--cluster-confirm
Confirm the existence of the device. This is issued in response to an --add request by a node in a cluster. When a node adds a device it sends a message to all nodes in the cluster to look for a device with a UUID. This translates to a udev notification with the UUID of the device to be added and the slot number. The receiving node must acknowledge this message with --cluster-confirm. Valid arguments are <slot>:<devicename> in case the device is found or <slot>:missing in case the device is not found.

--add-journal
Add a journal to an existing array, or recreate the journal for a RAID-4/5/6 array that lost its journal device. To avoid interrupting ongoing write operations, --add-journal only works for arrays in the Read-Only state.

--failfast
Subsequent devices that are added or re-added will have the 'failfast' flag set. This is only valid for RAID1 and RAID10 and means that the 'md' driver will avoid long timeouts on error handling where possible.

--nofailfast
Subsequent devices that are re-added will be re-added without the 'failfast' flag set.
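A brief sketch of --replace and --with together (array and device names are assumptions); the suspect member stays in service while it is rebuilt onto the named spare:

sudo mdadm /dev/md0 --replace /dev/sdc1 --with /dev/sde1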
Create a RAID array:
sudo mdadm --create /dev/md/MyRAID --level raid_level --raid-devices number_of_disks /dev/sdXN
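For example, a two-disk RAID1 mirror (the device names here are illustrative):

sudo mdadm --create /dev/md/MyRAID --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1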
Stop a RAID array:
sudo mdadm --stop /dev/md0
Mark a disk as failed:
sudo mdadm --fail /dev/md0 /dev/sdXN
Remove a disk from the array:
sudo mdadm --remove /dev/md0 /dev/sdXN
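The fail and remove steps can also be combined into a single Manage-mode command:

sudo mdadm /dev/md0 --fail /dev/sdXN --remove /dev/sdXN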
Add a disk to the array:
sudo mdadm --add /dev/md0 /dev/sdXN
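If the disk was previously a member of this array, --re-add (described above) can return it to its old slot instead:

sudo mdadm --re-add /dev/md0 /dev/sdXN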
Show RAID array details:
sudo mdadm --detail /dev/md0
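To watch resync or rebuild progress, /proc/mdstat can also be read directly:

cat /proc/mdstat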
Reset a disk by deleting its RAID metadata:
sudo mdadm --zero-superblock /dev/sdXN
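To retire an array completely (the member names here are illustrative), stop it first and then clear the superblock on each former member:

sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sda1 /dev/sdb1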